The Polygraph Place



  Polygraph Place Bulletin Board
  Professional Issues - Private Forum for Examiners ONLY
  Dr. Jeffrey W. Rosky: PCSOT is "correctional quackery"

Author Topic:   Dr. Jeffrey W. Rosky: PCSOT is "correctional quackery"
Dan Mangan
Member
posted 10-02-2012 08:41 PM
http://www.youtube.com/watch?v=BybrnNyAkHg

res ipsa loquitur ("the thing speaks for itself")


clambrecht
Member
posted 10-02-2012 10:58 PM
Hmmmmm.....


Polygraph est optimus methodi praesto ad determinandum veritas ("The polygraph is the best method available for determining the truth")



rnelson
Member
posted 10-02-2012 11:21 PM
Rosky has also just published a hard-hitting article on the same material in the ATSA journal.

He sends us a strong message to account for ourselves in terms of outcomes, but he also anchors himself to the NRC report for information about accuracy. He still emphasizes fear as the basis of response, and he selectively dismantles arguments in favor of the polygraph. He also misses the point on outcomes by overemphasizing sexual recidivism as the only outcome he examines.

What he does correctly is refuse to be impressed by uncertainty - by research that seems like it might be favorable but is impaired by confounds that could alternatively account for the results.

You can hear the trend in the questions - "are professionals simply trained to believe the polygraph works to reduce recidivism?"

Rosky is correct that professional quackery is defined in part by a refusal to consider evidence that contradicts or does not support what we want to say. Refusing to modify or improve our hypotheses and theories in response to new information and new evidence undermines our credibility and fuels the arguments of our critics.

Rosky is an associate professor who has been in contact with a number of systems around the country. He will continue to publish and try to make a name for himself. It will be best to be prepared for him...

r

------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)



Dan Mangan
Member
posted 10-03-2012 07:58 AM
From the APA site's page on validity:

Polygraph techniques in which multiple issues were encompassed by the relevant questions produced an aggregated decision accuracy of 85% (confidence interval 77% - 93%) with an inconclusive rate of 13%.

Do these figures apply to PCSOT and LEPET?

If so, where are the studies? (You can skip the PCSOT one from the UK that relied on self-reporting.) If not, why wasn't this critical distinction made clear?
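For readers wondering what an accuracy figure with a confidence interval actually asserts, here is a minimal sketch of a normal-approximation (Wald) interval for a proportion. The sample size below is hypothetical, chosen only to roughly reproduce the quoted interval width; the APA meta-analysis used its own aggregation methodology, which this does not reproduce.

```python
import math

def wald_interval(p_hat, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion.

    p_hat: observed proportion correct (e.g., 0.85)
    n:     number of decisions (hypothetical here)
    z:     critical value, 1.96 for a 95% interval
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    return p_hat - z * se, p_hat + z * se

# 85 correct decisions out of a hypothetical 100:
lo, hi = wald_interval(0.85, 100)
print(f"85% accuracy, n=100: {lo:.2f} - {hi:.2f}")  # about 0.78 - 0.92
```

The point is simply that the interval's width depends on the amount of data behind the estimate, which is why the question of which studies feed a figure matters.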



rnelson
Member
posted 10-04-2012 03:00 PM
The meta-analysis was conducted under the assumption (as Cleve Backster would say) of ideally formulated relevant questions. That is, we made no attempt to differentiate or study the effectiveness of different types of questions. Instead, we assumed that the questions were (mostly) adequate to prompt the examinee to respond due to some combination of emotion, cognition, and behaviorally conditioned experience regarding the target stimuli.

We could also assume there is a normal degree of imperfection in the questions used in the included studies. The bottom line is this: ain't nothin' perfect in life. If we did nothing until we could control every variable, then we (and everyone else) would do... nothing. So we're not done, and there is obviously more to learn later.

Research can be conducted at a variety of levels, one of which would be to investigate the effectiveness of different types of exams (e.g., PCSOT, law-enforcement applicant, information security, criminal investigation, etc.).

The hypothesis would be that different types of topics or targets produce different rates of accuracy. It is an interesting hypothesis - one that should someday be studied. However, studying it will require that someone articulate a credible hypothesis as to why different topics or target questions would produce different results. In the past we have engaged in all kinds of amateur psychologizing (read: mind-reading) - suggesting that some questions are too hot, or too emotional (which suggests that the polygraph measures temperature or emotion).

What the premise really seems to say is that examinees may respond too much to the stimuli. This, of course, raises the question: how much response is too much? And how much response is enough? The real problem is that to answer these questions we would have to know, in advance of the test, how much we want someone to respond - we would essentially have to know in advance whether they were guilty or innocent.

It will be better to simply present the stimulus questions according to reasonable rules and principles based on what we actually know about psychology and physiological responding. Without any mind-reading.

There is ample evidence that the polygraph does not measure either temperature or emotion - and the emerging base of scientific evidence that we do have seems not to support the notion that different topics/questions perform very differently. Now, certainly there will be differences somewhere, but they may or may not be related to the domains of law-enforcement applicant, PCSOT, information security, or criminal investigation. Differences in question effectiveness may have more to do with general principles that are applicable to all domains of testing.

One example: in 2007 we showed a poster presentation on OSS-3 at the APA conference. OSS-3 results were shown for three samples of screening exams, for which the sampling methodology was the same - a suboptimal criterion established by either a deceptive test result coupled with a confession to a test question, or a truthful test result supported by two different QC examiners. One LEPET sample. One PCSOT maintenance sample. One PCSOT sex history sample. The resulting accuracy was essentially the same for all three samples.

The actual studies in the meta-analysis are too many for me to remember, but they are described in detail in the complete report.

r

